
    Granular fuzzy models: a study in knowledge management in fuzzy modeling

    Abstract: In system modeling, knowledge management comes vividly into the picture when dealing with a collection of individual models. These models, regarded as sources of knowledge, are engaged in a collective, collaborative development aimed at establishing modeling outcomes of a global character. The result comes in the form of a so-called granular fuzzy model, which directly reflects and quantifies the diversity of the available sources of knowledge (local models) involved in knowledge management. In this study, several detailed algorithmic schemes are presented along with related computational aspects associated with Granular Computing. It is also shown how constructing information granules through the principle of justifiable granularity becomes advantageous in the realization of granular fuzzy models and in quantifying the quality (specificity) of the modeling results. We focus on the design of granular fuzzy models in which the locally available models are fuzzy rule-based. The model is quantified in terms of two conflicting criteria: (a) a coverage criterion expressing to what extent the resulting information granules "cover" (include) the data, and (b) a specificity criterion articulating how detailed (specific) the obtained information granules are. The overall quality of the granular model is also assessed by determining the area under the curve (AUC), where the curve is formed in the coverage-specificity coordinates. Numeric results are discussed with the intent of displaying the most essential features of the proposed methodology and algorithmic developments. A minimal sketch of the coverage-specificity trade-off follows below.
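
    The coverage-specificity trade-off and its AUC summary can be illustrated in the simplest setting of interval information granules over one-dimensional numeric data. The Python sketch below is only an illustration under that assumption; the linear specificity measure, the width sweep, and the trapezoidal AUC estimate are illustrative choices, not the paper's algorithm.

        import numpy as np

        def coverage(data, lower, upper):
            # Fraction of data points included in the interval granule [lower, upper].
            return np.mean((data >= lower) & (data <= upper))

        def specificity(lower, upper, data_range):
            # 1 for a degenerate (point) interval, decreasing linearly to 0
            # as the interval widens to the full range of the data.
            return max(0.0, 1.0 - (upper - lower) / data_range)

        def coverage_specificity_auc(data, center, widths):
            # Sweep the granule width, record (coverage, specificity) pairs and
            # integrate specificity over coverage with the trapezoidal rule.
            data = np.asarray(data, dtype=float)
            data_range = data.max() - data.min()
            cov, spec = [], []
            for w in widths:
                lo, hi = center - w / 2.0, center + w / 2.0
                cov.append(coverage(data, lo, hi))
                spec.append(specificity(lo, hi, data_range))
            cov, spec = np.array(cov), np.array(spec)
            order = np.argsort(cov)          # integrate along increasing coverage
            cov, spec = cov[order], spec[order]
            return np.sum(np.diff(cov) * (spec[:-1] + spec[1:]) / 2.0)

        # Example: granules centered at the sample mean, widths from narrow to wide.
        rng = np.random.default_rng(0)
        data = rng.normal(size=500)
        print(coverage_specificity_auc(data, data.mean(), np.linspace(0.1, 6.0, 30)))

    Narrow granules are highly specific but cover little data, wide granules cover everything but say little; the AUC condenses that trade-off into a single quality number for the granular model.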

    Is ProtoPNet Really Explainable? Evaluating and Improving the Interpretability of Prototypes

    ProtoPNet and its follow-up variants (ProtoPNets) have attracted broad research interest for their intrinsic interpretability derived from prototypes and their accuracy comparable to non-interpretable counterparts. However, it has recently been found that the interpretability of prototypes can be corrupted due to the semantic gap between similarity in latent space and similarity in input space. In this work, we make the first attempt to quantitatively evaluate the interpretability of prototype-based explanations, rather than relying solely on qualitative evaluation with a few visualization examples, which can easily be misled by cherry-picking. To this end, we propose two evaluation metrics, termed the consistency score and the stability score, to evaluate explanation consistency across images and explanation robustness against perturbations, both of which are essential for explanations used in practice. Furthermore, we propose a shallow-deep feature alignment (SDFA) module and a score aggregation (SA) module to improve the interpretability of prototypes. We conduct systematic evaluation experiments and substantial discussions to uncover the interpretability of existing ProtoPNets. Experiments demonstrate that our method significantly outperforms the state of the art under both the conventional qualitative evaluations and the proposed quantitative evaluations, in both accuracy and interpretability. Code is available at https://github.com/hqhQAQ/EvalProtoPNet.
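
    The abstract does not spell out the metric definitions. As a rough illustration only, a stability-style check can be read as asking whether a prototype's strongest activation location survives a small input perturbation. The Python sketch below follows that reading; the model interface returning per-prototype similarity maps of shape [B, P, H, W], the Gaussian noise level, and all function names are assumptions for illustration, not the ProtoPNet codebase API or the paper's exact metric.

        import torch

        def top_activation_location(similarity_map):
            # similarity_map: [H, W] tensor of prototype-patch similarities.
            flat_idx = torch.argmax(similarity_map).item()
            return divmod(flat_idx, similarity_map.shape[1])

        def stability_like_score(model, images, prototype_idx, noise_std=0.05):
            # Fraction of images whose strongest activation location for a given
            # prototype is unchanged after adding small Gaussian input noise.
            # Assumption: model(images) returns similarity maps of shape [B, P, H, W].
            with torch.no_grad():
                clean = model(images)[:, prototype_idx]                    # [B, H, W]
                noisy = model(images + noise_std * torch.randn_like(images))[:, prototype_idx]
            unchanged = [
                top_activation_location(c) == top_activation_location(n)
                for c, n in zip(clean, noisy)
            ]
            return sum(unchanged) / len(unchanged)

    A consistency-style check could be sketched analogously by comparing whether a prototype activates on the same object part across different images of the same class.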